-
Sikkandar, Mohamed Yacin (Ed.)
In various biological systems, analyzing how cell behaviors are coordinated over time would enable a deeper understanding of tissue-scale responses to physiologic or superphysiologic stimuli. Such data are necessary for establishing both normal tissue function and the sequence of events after injury that lead to chronic disease. However, collecting and analyzing these large datasets presents a challenge: such systems are time-consuming to process, and the overwhelming scale of the data makes it difficult to parse overall behaviors. This problem calls for an analysis technique that can quickly provide an overview of the groups present in the entire system and also produce a meaningful categorization of cell behaviors. Here, we demonstrate the application of an unsupervised method, the Variational Autoencoder (VAE), to learn the features of cells in cartilage tissue after impact-induced injury and to identify meaningful clusters of chondrocyte behavior. This technique quickly generated new insights into the spatial distribution of specific cell-behavior phenotypes and connected specific peracute calcium-signaling timeseries with long-term cellular outcomes, demonstrating the value of the VAE technique.
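The pipeline this abstract describes reduces to a compact recipe: train a VAE on per-cell signaling traces, then cluster the learned latent codes. The sketch below is not the authors' code; it illustrates one way to do this in PyTorch with k-means from scikit-learn, and the trace length, network sizes, and cluster count are illustrative assumptions.

```python
# Minimal sketch: VAE on per-cell calcium-signaling traces, then k-means
# on the latent codes. Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

T, LATENT = 200, 8  # assumed trace length and latent dimension

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(T, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, T))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # reconstruction error plus KL divergence to the standard-normal prior
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def fit_and_cluster(x, n_clusters=5, epochs=100):
    """x: (n_cells, T) float tensor of per-cell signaling traces."""
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        recon, mu, logvar = model(x)
        elbo_loss(recon, x, mu, logvar).backward()
        opt.step()
    with torch.no_grad():
        codes = model.mu(model.enc(x))  # posterior mean as the per-cell feature
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(codes.numpy())
```

Cells assigned to the same cluster can then be mapped back to their positions in the tissue, which is the kind of step that yields the spatial distributions of behavior phenotypes the abstract mentions.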
-
We develop information-geometric techniques to analyze the trajectories of the predictions of deep networks during training. By examining the underlying high-dimensional probabilistic models, we reveal that the training process explores an effectively low-dimensional manifold. Networks with a wide range of architectures and sizes, trained using different optimization methods, regularization techniques, data-augmentation techniques, and weight initializations, lie on the same manifold in the prediction space. We study the details of this manifold and find that networks with different architectures follow distinguishable trajectories, but other factors have a minimal influence; larger networks train along a manifold similar to that of smaller networks, just faster; and networks initialized at very different parts of the prediction space converge to the solution along a similar manifold.
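A minimal sketch of the kind of prediction-space analysis this abstract refers to appears below: each training checkpoint is represented by its predicted class probabilities on a fixed set of samples, pairwise Bhattacharyya distances between checkpoints are computed, and the distance matrix is embedded in a few dimensions. Classical multidimensional scaling is used here as a simple stand-in for the embedding; the distance choice, shapes, and NumPy implementation are assumptions, not the authors' method.

```python
# Minimal sketch: embed a training trajectory by treating each checkpoint
# as a probabilistic model and measuring distances between checkpoints.
import numpy as np

def bhattacharyya(p, q):
    # p, q: (n_samples, n_classes) predicted probabilities; average over samples
    per_sample = -np.log(np.sum(np.sqrt(p * q), axis=-1) + 1e-12)
    return per_sample.mean()

def embed_trajectory(preds, dim=3):
    """preds: (n_checkpoints, n_samples, n_classes) probabilities.
    Returns (n_checkpoints, dim) coordinates via classical MDS."""
    n = len(preds)
    d2 = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d2[i, j] = d2[j, i] = bhattacharyya(preds[i], preds[j]) ** 2
    # double-center the squared distances, then eigendecompose
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ d2 @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return v[:, idx] * np.sqrt(np.abs(w[idx]))
```

Plotting the returned coordinates for checkpoints from several runs is the kind of visualization in which networks of different architectures would trace out distinguishable trajectories on a shared low-dimensional manifold.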
-
We develop information-geometric techniques to understand the representations learned by deep networks when they are trained on different tasks using supervised, meta-, semi-supervised, and contrastive learning. We shed light on the following phenomena that relate to the structure of the space of tasks: (1) the manifold of probabilistic models trained on different tasks using different representation-learning methods is effectively low-dimensional; (2) supervised learning on one task results in a surprising amount of progress even on seemingly dissimilar tasks, and progress on other tasks is larger if the training task has diverse classes; (3) the structure of the space of tasks indicated by our analysis is consistent with parts of the WordNet phylogenetic tree; (4) episodic meta-learning algorithms and supervised learning traverse different trajectories during training, but they eventually fit similar models; (5) contrastive and semi-supervised learning methods traverse trajectories similar to those of supervised learning. We use classification tasks constructed from the CIFAR-10 and ImageNet datasets to study these phenomena. Code is available at https://github.com/grasp-lyrl/picture_of_space_of_tasks.
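The abstract mentions classification tasks constructed from CIFAR-10. The hypothetical sketch below shows one simple way to build such sub-tasks as class subsets with remapped labels, using torchvision; the particular animal/vehicle split is an assumption for illustration, and the linked repository is the authoritative version.

```python
# Minimal sketch: build classification sub-tasks from CIFAR-10 class subsets.
import torch
from torch.utils.data import Dataset
from torchvision import datasets, transforms

class TaskDataset(Dataset):
    """A classification task built from a subset of CIFAR-10 classes,
    with labels remapped to 0..len(classes)-1."""
    def __init__(self, base, classes):
        self.base = base
        self.remap = {c: i for i, c in enumerate(classes)}
        self.indices = [i for i, y in enumerate(base.targets) if y in self.remap]

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, k):
        x, y = self.base[self.indices[k]]
        return x, self.remap[y]

base = datasets.CIFAR10("./data", train=True, download=True,
                        transform=transforms.ToTensor())
# hypothetical split into two tasks by semantic group
animals = TaskDataset(base, classes=[2, 3, 4, 5, 6, 7])  # bird, cat, deer, dog, frog, horse
vehicles = TaskDataset(base, classes=[0, 1, 8, 9])       # airplane, automobile, ship, truck
```

Models trained separately on such tasks can then be compared in prediction space, for example with the distance-and-embedding sketch given after the previous abstract.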